Conversation
…s in default llm class before trying to resolve it
Noravee
commented
Jul 25, 2025
```python
from neuro_san.internals.run_context.langchain.util.argument_validator import ArgumentValidator
from neuro_san.internals.utils.resolver_util import ResolverUtil


DEFAULT_LLM_CLASSES: Set[str] = {"anthropic", "azure-openai", "bedrock", "gemini", "ollama", "openai", "nvidia"}
```
Collaborator
Why not get these from the keys listed in default_llm_info.hocon "classes" dictionary?
Then when we add more we won't have 2 places to update.
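
One possible shape for that suggestion (a sketch only; `llm_infos` stands in for the parsed `default_llm_info.hocon` contents, and the nested values are placeholders):

```python
from typing import Any, Dict, Set

# Stand-in for the parsed default_llm_info.hocon contents (assumed shape:
# a top-level "classes" dictionary keyed by provider name).
llm_infos: Dict[str, Any] = {
    "classes": {
        "anthropic": {},
        "azure-openai": {},
        "bedrock": {},
        "gemini": {},
        "ollama": {},
        "openai": {},
        "nvidia": {},
    }
}

# Deriving the set from the config keys means a new provider only needs
# to be added in one place.
default_llm_classes: Set[str] = set(llm_infos.get("classes", {}))
```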
Noravee
commented
Jul 25, 2025
```python
if (
    llm is None
    and found_exception is not None
    and class_path not in DEFAULT_LLM_CLASSES
```
Collaborator
Author
Check that class is not in the default classes.
d1donlydfink
requested changes
Jul 25, 2025
Noravee
commented
Jul 25, 2025
```python
# and factory resolution failed.
class_path: str = config.get("class")
if llm is None and found_exception is not None and class_path:
    default_llm_classes: Set[str] = set(self.llm_infos.get("classes"))
```
Collaborator
Author
Get the default llm classes from the default llm info file.
Thank you @d1donlydfink for the suggestion.
d1donlydfink
approved these changes
Jul 25, 2025
This PR improves error handling during LLM instantiation when factory-based resolution fails.
Problem:
When all `llm_factories` fail to create an LLM, we fall back to using the `"class"` value in the config to attempt instantiation via a user-defined class. However, `"class"` is always set, even for factory-backed models like `"openai"` or `"bedrock"`. If we blindly attempt to resolve these default classes again via `create_base_chat_model_from_user_class`, it can raise a secondary error that masks the original factory error, making debugging much harder.
Fix:
Add a check to skip fallback resolution if `config["class"]` is one of the default classes. This ensures we preserve and raise the original, meaningful exception instead of a misleading one from reprocessing default class paths.
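
The guard described above might look roughly like this (a minimal sketch, not the actual neuro-san implementation; the function name and parameter names are assumptions):

```python
from typing import Any, Dict, Optional, Set


def fallback_or_reraise(
    llm: Optional[Any],
    found_exception: Optional[Exception],
    config: Dict[str, Any],
    llm_infos: Dict[str, Any],
) -> Optional[Any]:
    """Skip user-class fallback when "class" names a default provider."""
    class_path: Optional[str] = config.get("class")
    if llm is None and found_exception is not None and class_path:
        # Default class names come from the "classes" dictionary of the
        # parsed default_llm_info.hocon (shape assumed here).
        default_llm_classes: Set[str] = set(llm_infos.get("classes", {}))
        if class_path in default_llm_classes:
            # "class" refers to a factory-backed model; resolving it as a
            # user class would only mask the original factory error.
            raise found_exception
        # Otherwise, attempt user-defined class resolution here, e.g. via
        # create_base_chat_model_from_user_class (omitted in this sketch).
    return llm
```

With this shape, a factory failure for a default class such as `"openai"` re-raises the original exception instead of producing a misleading secondary error.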